%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2021/09.06.22.34
%2 sid.inpe.br/sibgrapi/2021/09.06.22.34.57
%@doi 10.1109/SIBGRAPI54419.2021.00011
%T Training Deep Networks from Zero to Hero: avoiding pitfalls and going beyond
%D 2021
%A Ponti, Moacir Antonelli,
%A Santos, Fernando Pereira dos,
%A Ribeiro, Leo Sampaio Ferraz,
%A Cavallari, Gabriel Biscaro,
%@affiliation Universidade de São Paulo
%@affiliation Universidade de São Paulo
%@affiliation Universidade de São Paulo
%@affiliation Universidade de São Paulo
%E Paiva, Afonso,
%E Menotti, David,
%E Baranoski, Gladimir V. G.,
%E Proença, Hugo Pedro,
%E Junior, Antonio Lopes Apolinario,
%E Papa, João Paulo,
%E Pagliosa, Paulo,
%E dos Santos, Thiago Oliveira,
%E e Sá, Asla Medeiros,
%E da Silveira, Thiago Lopes Trugillo,
%E Brazil, Emilio Vital,
%E Ponti, Moacir A.,
%E Fernandes, Leandro A. F.,
%E Avila, Sandra,
%B Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)
%C Gramado, RS, Brazil (virtual)
%8 18-22 Oct. 2021
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K Deep Learning, Convolutional Networks, Survey, Training.
%X Training deep neural networks can be challenging on real-world data. Using models as black boxes, even with transfer learning, can result in poor generalization or inconclusive results on small datasets or in specific applications. This tutorial covers the basic steps as well as more recent options for improving models, in particular, but not restricted to, supervised learning. It can be especially useful for datasets that are not as well prepared as those in challenges, and also under scarce annotation and/or small data. We describe basic procedures such as data preparation, optimization, and transfer learning, as well as recent architectural choices, including the use of transformer modules, alternative convolutional layers, activation functions, and width/depth, along with training procedures such as curriculum, contrastive, and self-supervised learning.
%@language en
%3 2021_sibgrapi__tutorial_CR.pdf